Sleep-like slow oscillations improve visual classification through synaptic homeostasis and memory association in a thalamo-cortical model
The occurrence of sleep has passed through the evolutionary sieve and is
widespread among animal species. Sleep is known to benefit cognitive and
mnemonic tasks, whereas chronic sleep deprivation is detrimental. Despite the
importance of the phenomenon, a complete understanding of its functions and
underlying mechanisms is still lacking. In this paper, we show interesting
effects of deep-sleep-like slow oscillation activity on a simplified
thalamo-cortical model which is trained to encode, retrieve and classify images
of handwritten digits. During slow oscillations,
spike-timing-dependent-plasticity (STDP) produces a differential homeostatic
process. It is characterized by both a specific unsupervised enhancement of
connections among groups of neurons associated with instances of the same class
(digit) and a simultaneous down-regulation of stronger synapses created by the
training. This hierarchical organization of post-sleep internal representations
favours higher performances in retrieval and classification tasks. The
mechanism is based on the interaction between top-down cortico-thalamic
predictions and bottom-up thalamo-cortical projections during deep-sleep-like
slow oscillations. Indeed, when learned patterns are replayed during sleep,
cortico-thalamo-cortical connections favour the activation of other neurons
coding for similar thalamic inputs, promoting their association. Such a
mechanism hints at possible applications to artificial learning systems.
Comment: 11 pages, 5 figures, v5 is the final version published in the Scientific Reports journal
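The differential homeostatic effect described above can be sketched with a standard pair-based STDP rule with soft weight bounds. This is an illustrative toy rule with made-up parameters, not the exact plasticity model used in the paper:

```python
# Illustrative pair-based STDP with soft weight bounds (hypothetical
# parameters; the paper's actual plasticity rule may differ).
import math

def stdp_dw(dt_ms, w, a_plus=0.01, a_minus=0.012, tau=20.0, w_max=1.0):
    """Weight change for a pre->post spike pair, dt_ms = t_post - t_pre."""
    if dt_ms >= 0:
        # pre before post: potentiation, scaled down for already-strong synapses
        return a_plus * math.exp(-dt_ms / tau) * (w_max - w)
    # post before pre: depression, strongest for strong synapses
    return -a_minus * math.exp(dt_ms / tau) * w

# During slow-oscillation replay, neurons coding the same digit class fire in
# close succession, so their mutual (weak) links grow, while the strong
# synapses created by training receive smaller gains: a differential
# homeostatic effect.
w_weak, w_strong = 0.1, 0.9
print(stdp_dw(5.0, w_weak) > stdp_dw(5.0, w_strong))  # True
```

The soft bound `(w_max - w)` is one common way to obtain the down-regulation of stronger synapses; the model in the paper may implement this differently.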
Scaling of a large-scale simulation of synchronous slow-wave and asynchronous awake-like activity of a cortical model with long-range interconnections
Cortical synapse organization supports a range of dynamic states on multiple
spatial and temporal scales, from synchronous slow wave activity (SWA),
characteristic of deep sleep or anesthesia, to fluctuating, asynchronous
activity during wakefulness (AW). Such dynamic diversity poses a challenge for
producing efficient large-scale simulations that embody realistic metaphors of
short- and long-range synaptic connectivity. In fact, during SWA and AW
different spatial extents of the cortical tissue are active in a given timespan
and at different firing rates, which implies a wide variety of loads of local
computation and communication. A balanced evaluation of simulation performance
and robustness should therefore include tests of a variety of cortical dynamic
states. Here, we demonstrate performance scaling of our proprietary Distributed
and Plastic Spiking Neural Networks (DPSNN) simulation engine in both SWA and
AW for bidimensional grids of neural populations, reflecting the modular
organization of the cortex. We explored networks up to 192x192 modules, each
composed of 1250 integrate-and-fire neurons with spike-frequency adaptation,
and exponentially decaying inter-modular synaptic connectivity with varying
spatial decay constant. For the largest networks the total number of synapses
was over 70 billion. The execution platform included up to 64 dual-socket
nodes, each socket mounting 8 Intel Xeon Haswell processor cores at a
2.40 GHz clock rate. Network initialization time, memory usage, and execution time
showed good scaling performances from 1 to 1024 processes, implemented using
the standard Message Passing Interface (MPI) protocol. We achieved simulation
speeds between 2.3x10^9 and 4.1x10^9 synaptic events per second for both
cortical states in the explored range of inter-modular interconnections.
Comment: 22 pages, 9 figures, 4 tables
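A quick back-of-envelope check makes the scale of the largest configuration concrete, using only the numbers stated in the abstract (grid size, neurons per module, total synapse count):

```python
# Back-of-envelope scale check for the largest network in the abstract.
modules = 192 * 192                    # 2-D grid of neural populations
neurons = modules * 1250               # integrate-and-fire neurons per module
synapses_per_neuron = 70e9 / neurons   # implied average fan-in (synapses stated as >70 billion)

print(neurons)                     # 46080000 neurons
print(round(synapses_per_neuron))  # ~1519 synapses per neuron on average
```

So the ">70 billion synapses" figure corresponds to roughly 46 million neurons with an average of about 1.5 thousand synapses each.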
The Brain on Low Power Architectures - Efficient Simulation of Cortical Slow Waves and Asynchronous States
Efficient brain simulation is a scientific grand challenge, a
parallel/distributed coding challenge and a source of requirements and
suggestions for future computing architectures. Indeed, the human brain
includes about 10^15 synapses and 10^11 neurons activated at a mean rate of
several Hz. Full brain simulation poses Exascale challenges even if simulated
at the highest abstraction level. The WaveScalES experiment in the Human Brain
Project (HBP) has the goal of matching experimental measures and simulations of
slow waves during deep-sleep and anesthesia and the transition to other brain
states. The focus is the development of dedicated large-scale
parallel/distributed simulation technologies. The ExaNeSt project designs an
ARM-based, low-power HPC architecture scalable to millions of cores, developing
a dedicated scalable interconnect system, and SWA/AW simulations are included
among the driving benchmarks. At the joint between both projects is the INFN
proprietary Distributed and Plastic Spiking Neural Networks (DPSNN) simulation
engine. DPSNN can be configured to stress either the networking or the
computation features available on the execution platforms. The simulation
stresses the networking component when the neural net - composed of a
relatively low number of neurons, each projecting thousands of synapses -
is distributed over a large number of hardware cores. As the number of
neurons per core grows, computation becomes the dominant component for
short-range connections. This paper reports preliminary performance
results obtained on an ARM-based HPC prototype developed in the framework of
the ExaNeSt project. Furthermore, a comparison is given of instantaneous power,
total energy consumption, execution time and energetic cost per synaptic event
of SWA/AW DPSNN simulations when executed on either ARM- or Intel-based server
platforms.
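The energetic cost per synaptic event compared across the ARM- and Intel-based platforms reduces to a simple ratio of measured quantities. The helper below is a sketch of that metric; the numeric values in the example are placeholders, not measurements from the paper:

```python
# Sketch of the energy-per-synaptic-event metric used to compare platforms.
# The numbers in the example run are made-up placeholders, not measured values.
def energy_per_event(mean_power_w, exec_time_s, synaptic_events):
    """Joules per synaptic event = total energy / number of events."""
    total_energy_j = mean_power_w * exec_time_s
    return total_energy_j / synaptic_events

# Hypothetical run: 30 W average power for 100 s, simulating 1e9 synaptic events
print(energy_per_event(30.0, 100.0, 1e9))  # 3e-06 J/event, i.e. 3 uJ per synaptic event
```

The same formula applies to both platforms, so the comparison comes down to measuring instantaneous power and execution time for identical SWA/AW workloads.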
Two-compartment neuronal spiking model expressing brain-state specific apical-amplification, -isolation and -drive regimes
There is mounting experimental evidence that brain-state specific neural
mechanisms supported by connectomic architectures serve to combine past and
contextual knowledge with current, incoming flow of evidence (e.g. from sensory
systems). Such mechanisms are distributed across multiple spatial and temporal
scales and require dedicated support at the levels of individual neurons and
synapses. A prominent feature in the neocortex is the structure of large, deep
pyramidal neurons which show a peculiar separation between an apical dendritic
compartment and a basal dendritic/peri-somatic compartment, with distinctive
patterns of incoming connections and brain-state specific activation
mechanisms, namely apical-amplification, -isolation and -drive, associated with
wakefulness, deeper NREM sleep stages and REM sleep, respectively. The cognitive roles of
apical mechanisms have been demonstrated in behaving animals. In contrast,
classical models of learning spiking networks are based on single compartment
neurons that miss the description of mechanisms to combine apical and
basal/somatic information. This work aims to provide the computational
community with a two-compartment spiking neuron model which includes features
that are essential for supporting brain-state specific learning and with a
piece-wise linear transfer function (ThetaPlanes) at highest abstraction level
to be used in large scale bio-inspired artificial intelligence systems. A
machine learning algorithm, constrained by a set of fitness functions, selected
the parameters defining neurons expressing the desired apical mechanisms.
Comment: 19 pages, 38 figures, paper
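The idea of a piecewise linear transfer function gating basal drive by apical input can be sketched generically. This is a minimal illustration of the apical-amplification regime only; the actual ThetaPlanes function and its parameters are selected by the fitting procedure described in the paper and will differ:

```python
# Generic piecewise-linear transfer sketch in the spirit of apical
# amplification: apical input raises the gain applied to basal drive.
# Thresholds, gains and saturation here are illustrative assumptions,
# not the fitted ThetaPlanes parameters.
def pwl_rate(basal, apical, theta=1.0, g_base=1.0, g_amp=3.0, r_max=100.0):
    gain = g_amp if apical > theta else g_base   # apical-amplification regime
    drive = basal - theta
    rate = gain * drive if drive > 0 else 0.0    # rectified linear segment
    return min(rate, r_max)                      # saturating linear segment

print(pwl_rate(2.0, 0.0))  # 1.0 : basal drive alone
print(pwl_rate(2.0, 2.0))  # 3.0 : same basal drive, apically amplified
```

A piecewise-linear form keeps the abstraction cheap enough for large-scale bio-inspired AI systems while still expressing the brain-state specific gating of the two compartments.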
Gaussian and exponential lateral connectivity on distributed spiking neural network simulation
We measured the impact of long-range exponentially decaying intra-areal
lateral connectivity on the scaling and memory occupation of a distributed
spiking neural network simulator compared to that of short-range Gaussian
decays. While previous studies adopted short-range connectivity, recent
experimental neurosciences studies are pointing out the role of longer-range
intra-areal connectivity with implications on neural simulation platforms.
Two-dimensional grids of cortical columns composed of up to 11 million
point-like spiking neurons with spike-frequency adaptation were connected by up
to 30 billion synapses using short- and long-range connectivity models. The MPI processes
composing the distributed simulator were run on up to 1024 hardware cores,
hosted on a 64-node server platform. The hardware platform was a cluster of
IBM NX360 M5 16-core compute nodes, each containing two Intel Xeon Haswell
8-core E5-2630 v3 processors with a 2.40 GHz clock, interconnected through
an InfiniBand network equipped with 4x QDR switches.
Comment: 9 pages, 9 figures, added reference to final peer-reviewed version of conference paper and DOI
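The two connectivity models compared in the study correspond to different connection-probability kernels over inter-column distance. The sketch below shows the standard forms; the decay constants are arbitrary choices for illustration:

```python
# Connection-probability kernels compared in the study: short-range Gaussian
# decay vs long-range exponential decay (decay constants here are arbitrary).
import math

def gaussian_kernel(d, sigma=1.0):
    """Short-range: probability falls off as exp(-d^2 / 2*sigma^2)."""
    return math.exp(-(d * d) / (2.0 * sigma * sigma))

def exponential_kernel(d, lam=1.0):
    """Long-range: probability falls off as exp(-d / lambda)."""
    return math.exp(-d / lam)

# The exponential tail dominates at large distances, which is what increases
# inter-process communication and memory cost in the distributed simulator.
print(exponential_kernel(5.0) > gaussian_kernel(5.0))  # True
```

The heavier exponential tail means distant columns, typically hosted on different MPI processes, remain connected with non-negligible probability, which explains the scaling and memory-occupation impact measured in the paper.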